

Multi-Objective Policy Generation for Mobile Robots under Probabilistic Time-Bounded Guarantees

Lacerda, Bruno (School of Computer Science University of Birmingham) | Parker, David (School of Computer Science University of Birmingham) | Hawes, Nick (School of Computer Science University of Birmingham)

AAAI Conferences

We present a methodology for the generation of mobile robot controllers which offer probabilistic time-bounded guarantees on successful task completion, whilst also trying to satisfy soft goals. The approach is based on a stochastic model of the robot’s environment and action execution times, a set of soft goals, and a formal task specification in co-safe linear temporal logic, which are analysed using multi-objective model checking techniques for Markov decision processes. For efficiency, we propose a novel two-step approach. First, we explore policies on the Pareto front for minimising expected task execution time whilst optimising the achievement of soft goals. Then, we use this to prune a model with more detailed timing information, yielding a time-dependent policy for which more fine-grained probabilistic guarantees can be provided. We illustrate and evaluate the generation of policies on a delivery task in a care home scenario, where the robot also tries to engage in entertainment activities with the patients.
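The first step above selects policies on a Pareto front over expected task execution time and soft-goal achievement. As a minimal illustration of that Pareto reasoning (not the paper's multi-objective model-checking machinery for MDPs), candidate policies summarised as hypothetical (expected time, soft-goal achievement) pairs can be filtered to the non-dominated set:

```python
def pareto_front(points):
    """Keep the non-dominated points, where each point is a hypothetical
    (expected execution time, soft-goal achievement) summary of a policy:
    lower time is better, higher achievement is better. A point is dropped
    if some other point is at least as good on both objectives."""
    front = []
    for t, a in points:
        dominated = any(
            t2 <= t and a2 >= a and (t2, a2) != (t, a)
            for t2, a2 in points
        )
        if not dominated:
            front.append((t, a))
    return front

# Three candidate policies; the third is dominated by the first.
candidates = [(10, 0.9), (12, 0.95), (11, 0.8)]
print(pareto_front(candidates))
```

Each surviving point represents a genuine trade-off: moving along the front trades execution time against soft-goal achievement.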


Improving Plan Quality through Heuristics for Guiding and Pruning the Search: A Study Using LAMA

Percassi, Francesco (Università degli Studi di Brescia) | Gerevini, Alfonso Emilio (Università degli Studi di Brescia) | Geffner, Hector (Universitat Pompeu Fabra)

AAAI Conferences

Admissible heuristics are essential for optimal planning in the context of search algorithms like A*, and they can also be used in the context of suboptimal planning in order to find quality-bounded solutions. In satisficing planning, on the other hand, admissible heuristics are not exploited by the best-first search algorithms of existing planners even when a time window is available for improving the first solution found. For example, in the well-known planner LAMA, better solutions within such a time window are sought by restarting a Weighted-A* search guided by inadmissible heuristics, each time a better solution is found. In this paper, we investigate the use of admissible heuristics in the context of LAMA for pruning nodes that cannot lead to better solutions. The revised search of LAMA is experimentally evaluated using two alternative admissible heuristics for pruning and three types of problems: planning with soft goals, planning with action costs, and planning with both action costs and soft goals. Soft goals are compiled into hard goals following the approach of Keyder and Geffner. The empirical results show that the use of admissible heuristics in LAMA can substantially improve the planner's performance.
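The pruning idea described above can be sketched in a few lines. This is a schematic illustration with hypothetical `is_goal` and `successors` interfaces, not LAMA's actual implementation: an inadmissible heuristic guides a weighted best-first search, while an admissible heuristic discards any node whose lower bound cannot beat the incumbent solution.

```python
import heapq

def anytime_search(start, is_goal, successors, h_guide, h_adm, w=2.0):
    """Anytime weighted best-first search with admissible pruning.

    h_guide: inadmissible heuristic used to order the frontier (as in
             Weighted-A*); h_adm: admissible heuristic used only to prune.
    Any node n with g(n) + h_adm(n) >= cost of the best solution found
    so far cannot lead to a strictly better plan, so it is discarded.
    """
    best_cost, best_plan = float("inf"), None
    frontier = [(w * h_guide(start), 0, start, ())]
    while frontier:
        _, g, node, plan = heapq.heappop(frontier)
        if g + h_adm(node) >= best_cost:      # admissible pruning rule
            continue
        if is_goal(node):
            best_cost, best_plan = g, list(plan)
            continue                          # keep searching for better plans
        for action, cost, succ in successors(node):
            g2 = g + cost
            if g2 + h_adm(succ) < best_cost:  # prune before enqueueing, too
                heapq.heappush(
                    frontier,
                    (g2 + w * h_guide(succ), g2, succ, plan + (action,)))
    return best_plan, best_cost

# Toy state space: two routes to the goal 'g'; pruning discards the
# costlier one once the cheaper plan is known.
graph = {"s": [("a", 1, "x"), ("b", 5, "g")], "x": [("c", 1, "g")]}
plan, cost = anytime_search(
    "s", lambda n: n == "g", lambda n: graph.get(n, []),
    lambda n: 0, lambda n: 0)
print(plan, cost)
```

With `h_adm = 0` the pruning degenerates to comparing path costs; a stronger admissible heuristic prunes earlier and closer to the root.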


Planning with Always Preferences by Compilation into STRIPS with Action Costs

Ceriani, Luca (University of Brescia) | Gerevini, Alfonso Emilio (University of Brescia)

AAAI Conferences

We address planning with always preferences in propositional domains, proposing a new compilation schema for translating a STRIPS problem enriched with always preferences (and possibly also soft goals) into a STRIPS problem with action costs. Our method allows many STRIPS planners to effectively address planning with always preferences and soft goals. An experimental analysis indicates that such basic planners are competitive with current planners using techniques specifically developed to handle always preferences.
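The flavour of such a compilation can be sketched as follows. This is an illustrative sketch only, not the paper's actual schema: it tracks a violation flag for a preference "always p" and charges its penalty once at the end of the plan. Negative preconditions are written here as an `npre` field; real STRIPS compilations encode them with complementary fluents.

```python
def compile_always_pref(actions, p, penalty):
    """Sketch: compile the preference 'always p' (with a violation penalty)
    into action costs. Actions are dicts with 'pre'/'add'/'del'/'cost'
    fields; any action that deletes p is marked as raising violated_p,
    and one of two closing actions must run before the (new) hard goal
    checked_p is satisfied: pay_p charges the penalty if the preference
    was violated, skip_p is free otherwise."""
    new = {}
    for name, a in actions.items():
        a = dict(a)
        if p in a.get("del", set()):      # this action can break 'always p'
            a["add"] = a.get("add", set()) | {f"violated_{p}"}
        new[name] = a
    new[f"pay_{p}"] = {"pre": {f"violated_{p}"},
                       "add": {f"checked_{p}"}, "cost": penalty}
    new[f"skip_{p}"] = {"npre": {f"violated_{p}"},   # 'not violated_p'
                        "add": {f"checked_{p}"}, "cost": 0}
    return new, {f"checked_{p}"}          # checked_p becomes a hard goal
```

Minimising plan cost in the compiled problem then trades the penalty of violating the preference against the cost of plans that preserve it.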


Soft Goals Can Be Compiled Away

Keyder, Emil | Geffner, Hector

arXiv.org Artificial Intelligence

Soft goals extend the classical model of planning with a simple model of preferences. The best plans are then not the ones with least cost but the ones with maximum utility, where the utility of a plan is the sum of the utilities of the soft goals achieved minus the plan cost. Finding plans with high utility appears to involve two linked problems: choosing a subset of soft goals to achieve and finding a low-cost plan to achieve them. New search algorithms and heuristics have been developed for planning with soft goals, and a new track has been introduced in the International Planning Competition (IPC) to test their performance. In this note, we show however that these extensions are not needed: soft goals do not increase the expressive power of the basic model of planning with action costs, as they can easily be compiled away. We apply this compilation to the problems of the net-benefit track of the most recent IPC, and show that optimal and satisficing cost-based planners do better on the compiled problems than optimal and satisficing net-benefit planners on the original problems with explicit soft goals. Furthermore, we show that penalties, or negative preferences expressing conditions to avoid, can also be compiled away using a similar idea.
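The core of the compilation can be sketched compactly. This is a simplified sketch with an assumed dict-based action representation: each soft goal p with utility u becomes a hard goal p_done, achievable either by a zero-cost collect action (when p actually holds) or by a forgo action that costs u; the actual compilation additionally uses an explicit end-of-plan stage to order these actions after the rest of the plan.

```python
def compile_soft_goals(actions, hard_goals, soft_goals):
    """Compile soft goals into hard goals plus action costs.

    actions: dict name -> {'pre': set, 'add': set, 'cost': number}
    soft_goals: dict fluent -> utility
    For each soft goal p, minimising total cost in the compiled problem
    is equivalent to maximising (utility achieved - plan cost) in the
    original, up to the constant sum of all utilities.
    """
    new_actions = dict(actions)
    new_goals = set(hard_goals)
    for p, u in soft_goals.items():
        done = f"{p}_done"
        new_goals.add(done)                       # p_done is now a hard goal
        new_actions[f"collect_{p}"] = {           # free if p was achieved
            "pre": {p}, "add": {done}, "cost": 0}
        new_actions[f"forgo_{p}"] = {             # pay the forgone utility
            "pre": set(), "add": {done}, "cost": u}
    return new_actions, new_goals
```

A cost-based planner run on the compiled problem never needs to reason about utilities explicitly: skipping a soft goal simply shows up as the cost of its forgo action.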


Integrating a Closed World Planner with an Open World Robot: A Case Study

Talamadupula, Kartik (Arizona State University) | Benton, J. (Arizona State University) | Schermerhorn, Paul (Indiana University) | Kambhampati, Subbarao (Arizona State University) | Scheutz, Matthias (Indiana University)

AAAI Conferences

Consider the following problem: a human-robot team is actively engaged in an urban search and rescue (USAR) scenario inside a building of interest. The robot is placed inside this building, at the beginning of a long corridor; a sample layout is presented in Figure 1. The human team member has intimate knowledge of the building's layout, but is removed from the scene and can only interact with the robot via on-board wireless audio communication. The corridor in which the robot is located has doors leading off from either side into rooms, a fact known to the robot. However, unknown to the robot (and the human team member) is the possibility that these rooms may contain injured humans (victims). In this paper, we explore the issues involved in engineering an automated planner to guide a robot towards maximizing net benefit accompanied with goal achievement in such scenarios. The planning problem that we face involves partial satisfaction (in that the robot has to weigh the rewards of the soft goals against the cost of achieving them); it also requires replanning ability (in that the robot has to modify its current plan based on new goals that are added). An additional (perhaps more severe) complication is that the planner needs to handle goals involving objects whose existence is not known in the initial state (e.g., the location of the humans).


Soft Goals Can Be Compiled Away

Keyder, Emil | Geffner, Hector

Journal of Artificial Intelligence Research
